Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, have achieved great success in zero-shot text-guided 3D synthesis. However, because they train from scratch with random initialization and no prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the input text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from the input text in a text-to-shape stage as the 3D shape prior. We then use it to initialize a neural radiance field and optimize the field with the full prompt. For text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and the shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
Our education system comprises a series of curricula. For example, when we learn mathematics at school, we learn in order from addition to multiplication, and later to integration. Delineating a curriculum for teaching either a human or a machine shares the underlying goal of maximizing positive knowledge transfer from early to later tasks and minimizing forgetting of the early tasks. Here, we exhaustively surveyed the effect of curricula on existing continual learning algorithms in the class-incremental setting, where algorithms must learn classes one at a time from a continuous stream of data. We observed that across a breadth of possible class orders (curricula), curricula influence the retention of information, and that this effect is not just a product of stochasticity. Further, as a primary effort toward automated curriculum design, we proposed a method capable of designing and ranking effective curricula based on inter-class feature similarities. We compared the predicted curricula against empirically determined effective curricula and observed significant overlaps between the two. To support the study of a curriculum designer, we conducted a series of human psychophysics experiments and contributed a new continual learning benchmark in object recognition. We assessed the degree of agreement in effective curricula between humans and machines. Surprisingly, our curriculum designer successfully predicts an optimal set of curricula that is effective for human learning. There are many considerations in curriculum design, such as timely student feedback and learning with multiple modalities. Our study is the first attempt to set a standard framework for the community to tackle the problem of teaching humans and machines to learn continually.
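As a toy illustration of similarity-based curriculum ranking, the sketch below ranks class orders by the feature similarity of consecutive classes. This is a hedged sketch only: the scoring criterion (preferring dissimilar neighbours), the function names, and the class-mean features are our assumptions for illustration, not the paper's actual curriculum designer.

```python
from itertools import permutations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def curriculum_score(order, feats):
    """Sum of similarities between consecutive classes in the order.
    Lower = more dissimilar neighbours (one possible design criterion)."""
    return sum(cosine(feats[a], feats[b]) for a, b in zip(order, order[1:]))

def rank_curricula(feats):
    """Rank all class orders by the score above (feasible only at toy scale)."""
    classes = sorted(feats)
    return sorted(permutations(classes), key=lambda o: curriculum_score(o, feats))

# Hypothetical class-mean features for three classes.
feats = {"cat": [1.0, 0.1], "dog": [0.9, 0.2], "car": [0.0, 1.0]}
best = rank_curricula(feats)[0]  # the two similar classes end up separated
```

At realistic class counts one would replace the exhaustive permutation search with a greedy or heuristic ordering; the point here is only the similarity-based scoring.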
The activity of the grid cell population in the medial entorhinal cortex (MEC) of the mammalian brain forms a vector representation of the self-position of the animal. Recurrent neural networks have been proposed to explain the properties of the grid cells by updating the neural activity vector based on the velocity input of the animal. In doing so, the grid cell system effectively performs path integration. In this paper, we investigate the algebraic, geometric, and topological properties of grid cells using recurrent network models. Algebraically, we study the Lie group and Lie algebra of the recurrent transformation as a representation of self-motion. Geometrically, we study the conformal isometry of the Lie group representation where the local displacement of the activity vector in the neural space is proportional to the local displacement of the agent in the 2D physical space. Topologically, the compact abelian Lie group representation automatically leads to the torus topology commonly assumed and observed in neuroscience. We then focus on a simple non-linear recurrent model that underlies the continuous attractor neural networks of grid cells. Our numerical experiments show that conformal isometry leads to hexagon periodic patterns in the grid cell responses and our model is capable of accurate path integration. Code is available at \url{https://github.com/DehongXu/grid-cell-rnn}.
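A hedged sketch of the conformal isometry condition described above (notation ours, not the paper's): writing $v(x) \in \mathbb{R}^d$ for the grid-cell activity vector at 2D position $x \in \mathbb{R}^2$, the representation is conformally isometric if, for a small displacement $\Delta x$, $\| v(x + \Delta x) - v(x) \| = s \, \| \Delta x \| + o(\| \Delta x \|)$ for some scale factor $s > 0$ that is independent of the direction of $\Delta x$, i.e., local distances in neural space are proportional to local distances in physical space.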
Automatic speaker verification (ASV) has been widely used in real life for identity authentication. However, with the rapid development of voice conversion and speech synthesis algorithms and the improving quality of recording devices, ASV systems are vulnerable to spoofing attacks. In recent years, there have been many works on synthetic and replayed speech detection, and researchers have proposed a number of anti-spoofing methods based on hand-crafted features to improve the accuracy and robustness of synthetic and replayed speech detection systems. However, using hand-crafted features rather than raw waveforms discards information useful for anti-spoofing, which degrades detection performance. Inspired by the promising performance of ConvNeXt on image classification tasks, we extend the ConvNeXt network architecture to the spoofing-attack detection task and propose an end-to-end anti-spoofing model. By combining the extended architecture with a channel attention block, the proposed model can focus on the most informative sub-bands of the speech representation to improve anti-spoofing performance. Experiments show that our best single system achieves error rates of 1.88% and 2.79% on the ASVspoof 2019 LA and PA evaluation datasets, respectively, which demonstrates the model's anti-spoofing capability.
High-order correlation learning has shown its superiority in data representation learning, where hypergraphs have been widely used in recent decades. The performance of hypergraph-based representation methods, such as hypergraph neural networks, depends heavily on the quality of the hypergraph structure, and how to generate a hypergraph structure among data remains a challenging task. Missing and noisy data may lead to "bad connections" in the hypergraph structure and corrupt the hypergraph-based representation process. Revealing the high-order structure, i.e., the hypergraph behind the observed data, thus becomes an urgent and important task. To address this issue, we design a general paradigm of deep hypergraph structure learning, namely DeepHGSL, to optimize the hypergraph structure for hypergraph-based representation learning. Concretely, inspired by the information bottleneck principle for the robustness issue, we first extend it to the hypergraph case, named the hypergraph information bottleneck (HIB) principle. We then apply this principle to guide hypergraph structure learning, where HIB is introduced to construct a loss function that minimizes the noisy information in the hypergraph structure. The hypergraph structure can thereby be optimized, and this process can be regarded as enhancing the correct connections and weakening the wrong ones during the training phase. Consequently, the proposed method benefits from extracting more robust representations even on heavily noisy structures. Finally, we evaluate the model on four benchmark datasets for representation learning. The experimental results on both graph- and hypergraph-structured data demonstrate the effectiveness and robustness of our method compared with other state-of-the-art methods.
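As a hedged sketch of the principle involved (symbols ours): the classical information bottleneck learns a representation $Z$ of input $X$ for target $Y$ by minimizing $\mathcal{L}_{\mathrm{IB}} = -I(Z; Y) + \beta I(Z; X)$, trading prediction against compression. A hypergraph extension of the kind described above would condition on the features together with the hypergraph structure $\mathcal{H}$, i.e., minimize $-I(Z; Y) + \beta I(Z; X, \mathcal{H})$, so that noisy incidences in $\mathcal{H}$ are compressed away while task-relevant connections are retained; the exact HIB objective in the paper may differ in its precise conditioning.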
In this paper, we present the Multi-Forgery Detection Challenge, held in conjunction with an IEEE Computer Society workshop at CVPR 2022. Our Multi-Forgery Detection Challenge aims at detecting automatic image manipulations, including but not limited to image editing, image synthesis, image generation, and image Photoshopping. The challenge attracted 674 teams from all over the world, with about 2000 valid result submissions. We invited the top ten teams to present their solutions, among which three teams were awarded prizes in the grand finale. In this paper, we present the solutions of the top three teams in order to boost research in the area of image forgery detection.
Unsupervised domain adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain. In this paper, we present Prototypical Contrast Adaptation (ProCA), a simple and efficient contrastive learning method for unsupervised domain-adaptive semantic segmentation. Previous domain adaptation methods merely consider the alignment of the intra-class representational distributions across domains, while the inter-class structural relationships are insufficiently explored, so the aligned representations on the target domain may not be as discriminative as those on the source domain. Instead, ProCA incorporates inter-class information into class-wise prototypes and adopts class-centered distribution alignment for adaptation. By treating the prototype of the same class as a positive and the prototypes of other classes as negatives to achieve class-centered distribution alignment, ProCA achieves state-of-the-art performance on classical domain adaptation tasks, {\em i.e.}, GTA5 $\to$ Cityscapes and SYNTHIA $\to$ Cityscapes. Code is available at \href{https://github.com/jiangzhengkai/proca}{ProCA}.
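A minimal pure-Python sketch of a class-prototype contrastive loss of the general kind the abstract describes: prototypes are class-mean features, the prototype of a feature's own class is the positive, and all other class prototypes are negatives. The function names, the cosine-similarity/temperature choices, and the toy inputs are our assumptions for illustration, not ProCA's actual implementation.

```python
from math import exp, log, sqrt

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def class_prototypes(features, labels):
    """Class-wise prototypes as the mean feature vector of each class."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        for i, x in enumerate(f):
            acc[i] += x
        counts[y] = counts.get(y, 0) + 1
    return {y: [x / counts[y] for x in s] for y, s in sums.items()}

def proto_contrastive_loss(feature, label, prototypes, tau=0.1):
    """InfoNCE-style loss: own-class prototype is the positive,
    all other class prototypes serve as negatives."""
    logits = {y: cosine(feature, p) / tau for y, p in prototypes.items()}
    denom = sum(exp(v) for v in logits.values())
    return log(denom) - logits[label]
```

Minimizing this loss pulls a feature toward its own class prototype and pushes it away from the prototypes of other classes, which is the class-centered alignment idea in miniature.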
Latent space energy-based models (EBMs), also known as energy-based priors, have drawn growing interest in generative modeling. Fueled by their flexibility in formulation and strong modeling power over the latent space, recent works have made interesting attempts at interpretable text modeling. However, latent space EBMs also inherit some flaws of EBMs in data space: degenerate MCMC sampling quality in practice leads to poor generation quality and instability in training, especially on data with complex latent structures. Inspired by recent efforts that leverage diffusion recovery likelihood learning as a cure for the sampling issue, we introduce a novel symbiosis between the diffusion model and latent space EBMs within a variational learning framework, coined the latent diffusion energy-based model. We develop a geometric clustering-based regularization jointly with the information bottleneck to further improve the quality of the learned latent space. Experiments on several challenging tasks demonstrate the superior performance of our model on interpretable text modeling over strong counterparts.
Nowadays, many classification algorithms have been applied in various industries to help solve real-life problems. However, in many binary classification tasks, samples of the minority class make up only a small fraction of all instances, so the datasets we work with often suffer from high imbalance ratios. Existing models sometimes treat the minority class as noise, or regard its samples as outliers when confronted with skewed data. To address this problem, we propose a bagging ensemble learning framework, ASE (Anomaly Scoring Based Ensemble Learning). The framework has a scoring system based on anomaly detection algorithms that guides the resampling strategy by dividing the majority-class samples into subspaces. A specific number of instances are then under-sampled from each subspace and combined with the minority class to construct subsets. We compute the weights of the base classifiers trained on these subsets from the classification results of the anomaly detection model and the statistics of the subspaces. Experiments show that our ensemble learning model can significantly improve the performance of base classifiers, and that it is more efficient than other existing methods across a wide range of imbalance ratios, data scales, and data dimensions. ASE can be combined with various classifiers, and each part of our framework has been shown to be reasonable and necessary.
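A minimal sketch of the score-guided resampling step described above. Assumptions are labeled in the comments: we stand in a simple distance-to-mean score for the anomaly detection algorithm, split the majority class into subspaces by score rank, and draw a few samples from each subspace per subset; the function names, bin counts, and sampling sizes are ours, and the classifier-weighting step of ASE is omitted.

```python
import random

def anomaly_scores(majority):
    """Stand-in anomaly detector: Euclidean distance to the majority-class
    mean. (The framework can plug in any anomaly detection algorithm here.)"""
    dim = len(majority[0])
    mean = [sum(x[i] for x in majority) / len(majority) for i in range(dim)]
    return [sum((x[i] - mean[i]) ** 2 for i in range(dim)) ** 0.5 for x in majority]

def score_subspaces(majority, n_bins=3):
    """Split majority samples into subspaces by anomaly-score rank."""
    scores = anomaly_scores(majority)
    order = sorted(range(len(majority)), key=scores.__getitem__)
    size = -(-len(order) // n_bins)  # ceiling division
    return [[majority[i] for i in order[k:k + size]]
            for k in range(0, len(order), size)]

def build_subsets(majority, minority, n_subsets=5, per_bin=2, seed=0):
    """Each subset: a few samples under-sampled from every subspace,
    plus the whole minority class -- one balanced bag per base classifier."""
    rng = random.Random(seed)
    bins = score_subspaces(majority)
    subsets = []
    for _ in range(n_subsets):
        picked = [s for b in bins for s in rng.sample(b, min(per_bin, len(b)))]
        subsets.append(picked + list(minority))
    return subsets
```

Each resulting subset mixes "typical" and "atypical" majority samples with the full minority class, so the base classifiers see a balanced view of the data without discarding any region of the majority distribution.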
Generative adversarial networks (GANs) have proven very successful in image generation tasks, but GAN training suffers from instability. Many works improve training stability by manually modifying the GAN architecture, which requires human expertise and extensive trial and error. Neural architecture search (NAS), which aims to automate model design, has therefore been applied to search for GANs on the task of unconditional image generation. Early NAS-GANs search only the generator, to reduce difficulty. Some recent works attempt to search both the generator (G) and the discriminator (D) to improve GAN performance, but they still suffer from the instability of GAN training during search. To alleviate this instability, we propose an efficient two-stage evolutionary algorithm (EA)-based NAS framework to discover GANs, dubbed \textbf{EAGAN}. Specifically, we decouple the search of G and D into two stages and propose a weight-resetting strategy to improve the stability of GAN training. Moreover, we perform evolutionary operations to produce Pareto-front architectures under multiple objectives, yielding superior combinations of G and D. By leveraging a weight-sharing strategy and low-fidelity evaluation, EAGAN can significantly shorten the search time. EAGAN achieves highly competitive results on CIFAR-10 (IS = 8.81 $\pm$ 0.10, FID = 9.91) and surpasses previously NAS-searched GANs on the STL-10 dataset (IS = 10.44 $\pm$ 0.087, FID = 22.18).